Understanding Image Virality
Virality of online content on social networking websites is an important but
esoteric phenomenon often studied in fields like marketing, psychology and data
mining. In this paper we study viral images from a computer vision perspective.
We introduce three new image datasets from Reddit, and define a virality score
using Reddit metadata. We train classifiers with state-of-the-art image
features to predict virality of individual images, relative virality in pairs
of images, and the dominant topic of a viral image. We also compare machine
performance to human performance on these tasks. We find that computers perform
poorly with low-level features, and that high-level information is critical for
predicting virality. We encode semantic information through relative
attributes. We identify five key visual attributes that correlate with
virality. We create an attribute-based characterization of images that can
predict relative virality with 68.10% accuracy (SVM+Deep Relative Attributes)
-- better than humans at 60.12%. Finally, we study how human prediction of
image virality varies with different 'contexts' in which the images are viewed,
such as the influence of neighbouring images, images recently viewed, as well
as the image title or caption. This work is a first step in understanding the
complex but important phenomenon of image virality. Our datasets and
annotations will be made publicly available.
Comment: Pre-print, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
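As a rough illustration of the pairwise setup above, the sketch below trains a linear SVM on differences of per-image attribute vectors to predict which image in a pair is more viral; the features, labels, and hyperparameters are placeholders, not the authors' pipeline.

    # Rough sketch, not the authors' pipeline: predict which image in a pair is
    # more viral with a linear SVM over differences of per-image attribute
    # vectors. Features, labels, and hyperparameters below are placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_pairs, n_attrs = 1000, 5                     # e.g. 5 relative attributes
    feats_a = rng.normal(size=(n_pairs, n_attrs))  # attribute scores, image A
    feats_b = rng.normal(size=(n_pairs, n_attrs))  # attribute scores, image B
    labels = rng.integers(0, 2, size=n_pairs)      # 1 if A is the more viral image

    X = feats_a - feats_b                          # represent a pair by the attribute difference
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

    clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
    print("pairwise accuracy:", clf.score(X_te, y_te))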
Don't Just Listen, Use Your Imagination: Leveraging Visual Common Sense for Non-Visual Tasks
Artificial agents today can answer factual questions. But they fall short on
questions that require common sense reasoning. Perhaps this is because most
existing common sense databases rely on text to learn and represent knowledge.
But much of common sense knowledge is unwritten - partly because it tends not
to be interesting enough to talk about, and partly because some common sense is
unnatural to articulate in text. While unwritten, it is not unseen. In this
paper we leverage semantic common sense knowledge learned from images - i.e.
visual common sense - in two textual tasks: fill-in-the-blank and visual
paraphrasing. We propose to "imagine" the scene behind the text, and leverage
visual cues from the "imagined" scenes in addition to textual cues while
answering these questions. We imagine the scenes as a visual abstraction. Our
approach outperforms a strong text-only baseline on these tasks. Our proposed
tasks can serve as benchmarks to quantitatively evaluate progress in solving
tasks that go "beyond recognition". Our code and datasets are publicly
available.
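The sketch below illustrates the core idea of combining a textual cue with a cue from an "imagined" abstract scene when answering a fill-in-the-blank question; the scoring functions and scene representation are hypothetical stand-ins, not the paper's models.

    # Minimal sketch of the "imagine the scene" idea, with hypothetical scoring
    # functions (text_score, scene_score) standing in for the paper's models.
    def text_score(sentence, candidate):
        """Placeholder score for how well the candidate completes the sentence."""
        return -abs(len(candidate) - 5)  # stand-in; a real system would use a language model

    def scene_score(imagined_scene, candidate):
        """Placeholder compatibility of the candidate with the imagined abstract scene."""
        return 1.0 if candidate in imagined_scene.get("objects", []) else 0.0

    def fill_in_the_blank(sentence, candidates, imagined_scene):
        # Combine textual and visual cues and pick the highest-scoring candidate.
        return max(candidates,
                   key=lambda c: text_score(sentence, c) + scene_score(imagined_scene, c))

    print(fill_in_the_blank("Jenny threw the ___ to her dog.",
                            ["ball", "television"],
                            {"objects": ["girl", "dog", "ball"]}))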
Punny Captions: Witty Wordplay in Image Descriptions
Wit is a form of rich interaction that is often grounded in a specific
situation (e.g., a comment in response to an event). In this work, we attempt
to build computational models that can produce witty descriptions for a given
image. Inspired by a cognitive account of humor appreciation, we employ
linguistic wordplay, specifically puns, in image descriptions. We develop two
approaches: retrieving witty descriptions for a given image from a large
corpus of sentences, and generating them via an encoder-decoder neural
network architecture. We compare our approach against meaningful baseline
approaches via human studies and show substantial improvements. We find that
when humans are subject to the same constraints as the model on word usage
and style, people judge the image descriptions generated by our model to
be slightly wittier than human-written witty descriptions. Unsurprisingly,
humans are almost always wittier than the model when they are free to choose
the vocabulary, style, etc.
Comment: NAACL 2018 (11 pages
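The sketch below illustrates the retrieval-style approach in a toy form: look for a corpus sentence that uses a homophone of a word grounded in the image and swap the original word back in; the pun list, corpus, and matching rule are assumptions, not the authors' system.

    # Toy sketch of the retrieval-style approach, not the authors' system: find a
    # corpus sentence containing a homophone of a word grounded in the image and
    # swap the original word back in. The pun list and corpus are assumptions.
    pun_dict = {"sun": "son", "sea": "see"}  # image word -> homophone (toy list)
    corpus = [
        "Like father, like son.",
        "Time flies when you are having fun.",
    ]

    def retrieve_punny_caption(image_tags):
        for tag in image_tags:
            pun = pun_dict.get(tag)
            if pun is None:
                continue
            for sentence in corpus:
                if pun in sentence.lower():  # naive substring match for the sketch
                    return sentence.replace(pun, tag)
        return None

    print(retrieve_punny_caption(["sun", "beach"]))  # -> "Like father, like sun."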
Analyzing the Behavior of Visual Question Answering Models
Recently, a number of deep-learning based models have been proposed for the
task of Visual Question Answering (VQA). The performance of most models is
clustered around 60-70%. In this paper we propose systematic methods to analyze
the behavior of these models as a first step towards recognizing their
strengths and weaknesses, and identifying the most fruitful directions for
progress. We analyze two models, one each from two major classes of VQA models
-- with-attention and without-attention -- and show the similarities and
differences in the behavior of these models. We also analyze the winning entry
of the VQA Challenge 2016.
Our behavior analysis reveals that despite recent progress, today's VQA
models are "myopic" (tend to fail on sufficiently novel instances), often "jump
to conclusions" (converge on a predicted answer after 'listening' to just half
the question), and are "stubborn" (do not change their answers across images).
Comment: 13 pages, 20 figures; To appear in EMNLP 201
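One of the probes described above can be sketched as follows: hold the question fixed, vary the image, and measure how often the model repeats its most common answer; the model interface is assumed, not taken from the paper.

    # Sketch of one behavioral probe, with an assumed model interface: hold the
    # question fixed, vary the image, and check how often the model repeats its
    # single most common answer ("stubbornness"). `vqa_model` is hypothetical.
    from collections import Counter

    def stubbornness(vqa_model, question, images):
        """Fraction of images for which the model gives its most common answer."""
        answers = [vqa_model(image, question) for image in images]
        return Counter(answers).most_common(1)[0][1] / len(answers)

    # Values near 1.0 mean the model rarely changes its answer across images.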
Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering
A number of studies have found that today's Visual Question Answering (VQA)
models are heavily driven by superficial correlations in the training data and
lack sufficient image grounding. To encourage development of models geared
towards the latter, we propose a new setting for VQA where for every question
type, train and test sets have different prior distributions of answers.
Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we
call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2
respectively). First, we evaluate several existing VQA models under this new
setting and show that their performance degrades significantly compared to the
original VQA setting. Second, we propose a novel Grounded Visual Question
Answering model (GVQA) that contains inductive biases and restrictions in the
architecture specifically designed to prevent the model from 'cheating' by
primarily relying on priors in the training data. Specifically, GVQA explicitly
disentangles the recognition of visual concepts present in the image from the
identification of plausible answer space for a given question, enabling the
model to more robustly generalize across different distributions of answers.
GVQA is built off an existing VQA model -- Stacked Attention Networks (SAN).
Our experiments demonstrate that GVQA significantly outperforms SAN on both
VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more
powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in
several cases. GVQA offers strengths complementary to SAN when trained and
evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more
transparent and interpretable than existing VQA models.
Comment: 15 pages, 10 figures. To appear in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
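The sketch below shows, in simplified form and under assumed data fields, how a dataset could be re-split so that per-question-type answer priors differ between train and test; it is an analogue of the idea, not the released VQA-CP split procedure.

    # Illustrative sketch only -- not the released VQA-CP procedure: re-split a
    # VQA-style dataset so that, within each question type, the dominant answers
    # in train differ from those in test. Assumes each example is a dict with
    # "question_type" and "answer" keys.
    from collections import defaultdict

    def changing_priors_split(examples):
        by_type = defaultdict(list)
        for ex in examples:
            by_type[ex["question_type"]].append(ex)

        train, test = [], []
        for qtype, exs in by_type.items():
            # Rank answers by frequency within this question type.
            freq = defaultdict(int)
            for ex in exs:
                freq[ex["answer"]] += 1
            ranked = sorted(freq, key=freq.get, reverse=True)
            # Route even-ranked answers to train and odd-ranked to test, so each
            # split sees a different answer prior for this question type.
            train_answers = set(ranked[0::2])
            for ex in exs:
                (train if ex["answer"] in train_answers else test).append(ex)
        return train, test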